14 research outputs found

    Focusing Attention on the Health Aspects of Foods Changes Value Signals in vmPFC and Improves Dietary Choice

    Get PDF
    Attention is thought to play a key role in the computation of stimulus values at the time of choice, which suggests that attention manipulations could be used to improve decision-making in domains where self-control lapses are pervasive. We used an fMRI food choice task with non-dieting human subjects to investigate whether exogenous cues that direct attention to the healthiness of foods could improve dietary choices. Behaviorally, we found that subjects made healthier choices in the presence of health cues. In parallel, stimulus value signals in ventromedial prefrontal cortex were more responsive to the healthiness of foods in the presence of health cues, and this effect was modulated by activity in regions of dorsolateral prefrontal cortex. These findings suggest that the neural mechanisms used in successful self-control can be activated by exogenous attention cues, and provide insights into the processes through which behavioral therapies and public policies could facilitate self-control.

    STARC: Structured Annotations for Reading Comprehension

    Full text link
    We present STARC (Structured Annotations for Reading Comprehension), a new annotation framework for assessing reading comprehension with multiple choice questions. Our framework introduces a principled structure for the answer choices and ties them to textual span annotations. The framework is implemented in OneStopQA, a new high-quality dataset for evaluation and analysis of reading comprehension in English. We use this dataset to demonstrate that STARC can be leveraged for a key new application for the development of SAT-like reading comprehension materials: automatic annotation quality probing via span ablation experiments. We further show that it enables in-depth analyses and comparisons between machine and human reading comprehension behavior, including error distributions and guessing ability. Our experiments also reveal that the standard multiple choice dataset in NLP, RACE, is limited in its ability to measure reading comprehension. 47% of its questions can be guessed by machines without accessing the passage, and 18% are unanimously judged by humans as not having a unique correct answer. OneStopQA provides an alternative test set for reading comprehension which alleviates these shortcomings and has a substantially higher human ceiling performance. Comment: ACL 2020. OneStopQA dataset, STARC guidelines and human experiments data are available at https://github.com/berzak/onestop-q
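
    The guessing result above suggests a simple diagnostic that can be reproduced independently of any particular model: score a multiple-choice dataset with the passage withheld, so any accuracy above chance reflects cues in the questions and options alone. The sketch below illustrates that kind of passage-ablation ("guessing") probe; the record format and the word-overlap guesser are illustrative assumptions, not part of STARC or OneStopQA, and a real probe would plug in a trained multiple-choice model instead.
```python
# Minimal sketch of a passage-ablation ("guessing") probe. The record format
# and the no-passage baseline are hypothetical; a real probe would substitute
# a trained multiple-choice model for guess_without_passage.
import re

def _words(text: str) -> set:
    """Lowercased word set; punctuation is ignored."""
    return set(re.findall(r"[a-z']+", text.lower()))

def guess_without_passage(question: str, options: list) -> int:
    """Toy no-passage guesser: pick the option sharing the most words with the
    question, breaking ties by option length."""
    q = _words(question)
    return max(range(len(options)),
               key=lambda i: (len(q & _words(options[i])), len(options[i])))

def ablation_accuracy(dataset):
    """Fraction of questions answered correctly with the passage withheld.
    `dataset` is assumed to be a list of dicts with 'question', 'options',
    and 'label' keys."""
    correct = sum(guess_without_passage(ex["question"], ex["options"]) == ex["label"]
                  for ex in dataset)
    return correct / len(dataset)

toy = [{"question": "What did the author cook for dinner?",
        "options": ["A storm", "A simple dinner", "Nothing at all", "A new plan"],
        "label": 1}]
print(f"no-passage accuracy: {ablation_accuracy(toy):.2f}")
```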

    What's Cookin'? Interpreting Cooking Videos using Text, Speech and Vision

    Get PDF
    We present a novel method for aligning a sequence of instructions to a video of someone carrying out a task. In particular, we focus on the cooking domain, where the instructions correspond to the recipe. Our technique relies on an HMM to align the recipe steps to the (automatically generated) speech transcript. We then refine this alignment using a state-of-the-art visual food detector, based on a deep convolutional neural network. We show that our technique outperforms simpler techniques based on keyword spotting. It also enables interesting applications, such as automatically illustrating recipes with keyframes, and searching within a video for events of interest. Comment: To appear in NAACL 201
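
    As a rough illustration of the alignment step described above, the sketch below runs a Viterbi pass over a left-to-right HMM whose hidden states are recipe steps and whose observations are transcript sentences. A word-overlap emission score and fixed transition probabilities stand in for the paper's model, and the visual refinement stage is not reproduced; all scores and the toy example are assumptions for illustration.
```python
# Minimal sketch: Viterbi alignment of transcript sentences to recipe steps,
# where the step index can only stay the same or advance by one.
import numpy as np

def emission_score(step: str, sentence: str) -> float:
    """Log-score a transcript sentence under a recipe step via word overlap."""
    s, t = set(step.lower().split()), set(sentence.lower().split())
    return np.log(1e-3 + len(s & t) / max(1, len(s | t)))

def align(recipe_steps, transcript, stay_logp=np.log(0.6), advance_logp=np.log(0.4)):
    n_steps, n_obs = len(recipe_steps), len(transcript)
    score = np.full((n_obs, n_steps), -np.inf)   # best log-score ending in step j at time t
    back = np.zeros((n_obs, n_steps), dtype=int)  # best previous step
    score[0, 0] = emission_score(recipe_steps[0], transcript[0])
    for t in range(1, n_obs):
        for j in range(n_steps):
            stay = score[t - 1, j] + stay_logp
            adv = score[t - 1, j - 1] + advance_logp if j > 0 else -np.inf
            back[t, j] = j if stay >= adv else j - 1
            score[t, j] = max(stay, adv) + emission_score(recipe_steps[j], transcript[t])
    path = [int(np.argmax(score[-1]))]            # backtrace from the best final step
    for t in range(n_obs - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return list(reversed(path))

steps = ["add the remaining flour", "mix the batter well", "bake until golden"]
asr = ["now add the rest of the flour", "give it a good mix", "mix it some more",
       "into the oven until golden brown"]
print(align(steps, asr))   # step index per transcript sentence, e.g. [0, 1, 1, 2]
```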

    Bridging Information-Seeking Human Gaze and Machine Reading Comprehension

    Full text link
    In this work, we analyze how human gaze during reading comprehension is conditioned on the given reading comprehension question, and whether this signal can be beneficial for machine reading comprehension. To this end, we collect a new eye-tracking dataset with a large number of participants engaging in a multiple choice reading comprehension task. Our analysis of this data reveals increased fixation times over parts of the text that are most relevant for answering the question. Motivated by this finding, we propose making automated reading comprehension more human-like by mimicking human information-seeking reading behavior during reading comprehension. We demonstrate that this approach leads to performance gains on multiple choice question answering in English for a state-of-the-art reading comprehension model.
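
    A minimal sketch of the general idea, not the paper's architecture: train a reader both to answer the question and to match a token-level distribution derived from human fixation times, so that its internal attention is pulled toward the spans people dwell on. The model, loss weighting, and synthetic data below are assumptions for illustration (PyTorch).
```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GazeAugmentedReader(nn.Module):
    """Toy reader that predicts both per-token gaze and the answer option."""
    def __init__(self, vocab_size=10_000, dim=128, n_options=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.encoder = nn.GRU(dim, dim, batch_first=True)
        self.gaze_head = nn.Linear(dim, 1)          # relative fixation per token
        self.answer_head = nn.Linear(dim, n_options)

    def forward(self, token_ids):
        h, _ = self.encoder(self.embed(token_ids))           # (B, T, dim)
        gaze_logits = self.gaze_head(h).squeeze(-1)           # (B, T)
        pooled = (F.softmax(gaze_logits, dim=-1).unsqueeze(-1) * h).sum(1)
        return gaze_logits, self.answer_head(pooled)          # (B, T), (B, n_options)

def joint_loss(gaze_logits, answer_logits, human_gaze, answer_label, alpha=0.5):
    """Cross-entropy on the answer plus KL between predicted and human
    fixation distributions; alpha trades off the two terms."""
    answer_term = F.cross_entropy(answer_logits, answer_label)
    gaze_term = F.kl_div(F.log_softmax(gaze_logits, dim=-1),
                         human_gaze, reduction="batchmean")
    return answer_term + alpha * gaze_term

tokens = torch.randint(0, 10_000, (2, 30))
human_gaze = F.softmax(torch.rand(2, 30), dim=-1)   # normalized fixation times
labels = torch.tensor([1, 3])
model = GazeAugmentedReader()
loss = joint_loss(*model(tokens), human_gaze, labels)
loss.backward()
```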

    Bootstrap Learning Via Modular Concept Discovery

    Get PDF
    Suppose a learner is faced with a domain of problems about which it knows nearly nothing. It does not know the distribution of problems, the space of solutions is not smooth, and the reward signal is uninformative, providing perhaps a few bits of information but not enough to steer the learner effectively. How can such a learner ever get off the ground? A common intuition is that if the solutions to these problems share a common structure, and the learner can solve some simple problems by brute force, it should be able to extract useful components from these solutions and, by composing them, explore the solution space more efficiently. Here, we formalize this intuition, where the solution space is that of typed functional programs and the gained information is stored as a stochastic grammar over programs. We propose an iterative procedure for exploring such spaces: in the first step of each iteration, the learner explores a finite subset of the domain, guided by a stochastic grammar; in the second step, the learner compresses the successful solutions from the first step to estimate a new stochastic grammar. We test this procedure on symbolic regression and Boolean circuit learning and show that the learner discovers modular concepts for these domains. Whereas the learner is able to solve almost none of the posed problems in the procedure’s first iteration, it rapidly becomes able to solve a large number by gaining abstract knowledge of the structure of the solution space.
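
    The iterative procedure lends itself to a compact sketch: sample candidate programs from a weighted grammar, test them on the tasks, and re-estimate the grammar's production weights from the solutions that worked. The toy expression grammar and symbolic-regression tasks below are assumptions for illustration and are far simpler than the typed functional programs used in the paper.
```python
import random
from collections import defaultdict

# Tiny expression grammar over one variable x; each nonterminal maps to a
# list of productions (tuples of symbols).
RULES = {"E": [("x",), ("1",), ("(", "E", "+", "E", ")"), ("(", "E", "*", "E", ")")]}

def sample(symbol, weights, used, depth=0):
    """Sample an expression string, recording which productions were used."""
    if symbol not in RULES:
        return symbol
    n = len(RULES[symbol]) if depth < 4 else 2   # only terminals when too deep
    i = random.choices(range(n), [weights[(symbol, j)] for j in range(n)])[0]
    used.append((symbol, i))
    return "".join(sample(s, weights, used, depth + 1) for s in RULES[symbol][i])

def solves(expr, task):
    """A task is a list of (x, y) pairs the expression must reproduce."""
    try:
        return all(eval(expr, {}, {"x": x}) == y for x, y in task)
    except Exception:
        return False

def bootstrap(tasks, iterations=5, samples_per_iter=2000):
    weights = defaultdict(lambda: 1.0)            # uniform initial grammar
    for it in range(iterations):
        counts, solved = defaultdict(lambda: 1.0), set()
        for _ in range(samples_per_iter):
            used = []
            expr = sample("E", weights, used)
            for t_idx, task in enumerate(tasks):
                if solves(expr, task):
                    solved.add(t_idx)
                    for prod in used:             # credit productions in solutions
                        counts[prod] += 1.0
        weights = counts                          # re-estimated (unnormalized) grammar
        print(f"iteration {it}: solved {len(solved)}/{len(tasks)} tasks")
    return weights

tasks = [[(0, 1), (1, 2), (2, 3)],                # y = x + 1
         [(1, 1), (2, 4), (3, 9)]]                # y = x * x
bootstrap(tasks)
```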

    The drift diffusion model can account for value-based choice response times under high and low time pressure

    Get PDF
    An important open problem is how values are compared to make simple choices. A natural hypothesis is that the brain carries out the computations associated with the value comparisons in a manner consistent with the Drift Diffusion Model (DDM), since this model has been able to account for a large amount of data in other domains. We investigated the ability of four different versions of the DDM to explain the data in a real binary food choice task under conditions of high and low time pressure. We found that a seven-parameter version of the DDM can account for the choice and reaction time data with high accuracy in both the high and low time pressure conditions. The changes associated with the introduction of time pressure could be traced to changes in two key model parameters: the barrier height and the noise in the slope of the drift process.
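
    As a rough illustration of the model being fitted, the sketch below simulates a basic drift diffusion process in which evidence drifts toward one of two barriers at a rate proportional to the value difference. Time pressure is mimicked by lowering the barrier and adding trial-to-trial noise to the drift slope, the two parameters the abstract highlights; all parameter values are illustrative, not the fitted ones.
```python
import numpy as np

def simulate_ddm(value_left, value_right, barrier=1.0, drift_scale=0.002,
                 slope_noise=0.0, diffusion_noise=0.02, dt=1.0, max_t=5000,
                 rng=None):
    """Simulate one trial; returns (choice, reaction_time_in_ms)."""
    rng = rng or np.random.default_rng()
    slope = drift_scale + rng.normal(0.0, slope_noise)   # trial-to-trial slope noise
    drift = slope * (value_left - value_right)
    x, t = 0.0, 0.0
    while abs(x) < barrier and t < max_t:
        x += drift * dt + rng.normal(0.0, diffusion_noise) * np.sqrt(dt)
        t += dt
    # sign of the accumulator decides the choice (also covers rare timeouts)
    return ("left" if x > 0 else "right"), t

rng = np.random.default_rng(0)
conditions = [("low time pressure", dict(barrier=1.0, slope_noise=0.0)),
              ("high time pressure", dict(barrier=0.6, slope_noise=0.001))]
for label, kwargs in conditions:
    trials = [simulate_ddm(3, 1, rng=rng, **kwargs) for _ in range(1000)]
    p_best = np.mean([choice == "left" for choice, _ in trials])
    mean_rt = np.mean([t for _, t in trials])
    print(f"{label}: P(choose higher-valued item)={p_best:.2f}, mean RT={mean_rt:.0f} ms")
```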

    Enriching models of natural language with auxiliary data

    No full text
    Thesis: Ph.D., Massachusetts Institute of Technology, Department of Brain and Cognitive Sciences, February 2020. Manuscript; includes bibliographical references (pages 81-89). By Jonathan Matthew Malmaud.
    The highest-performing natural language processing models generally solve language tasks by deriving statistical regularities of sequences of arbitrary tokens supplied as training data. Humans have a much richer notion of language, however. For one thing, they understand that language refers to objects and actions in the real world, which enables them to use language to efficiently transmit instructions on how to accomplish goals. For another, they learn to focus their attention on only those spans of text important for accomplishing the task at hand. In this thesis, we attempt to improve machine models of language by taking inspiration from these aspects of human language. The first half of this thesis concerns understanding instructional "how-to" language, such as "Add remaining flour. Then mix." The meaning is ambiguous without context: Add how much flour to what? Mix what, using what tools, until when? We show how to successfully parse this language by maintaining a distribution over the state of a theoretical kitchen as the instructions are parsed. We also show how to aid interpretation when videos of the task are available by training a joint vision-language model with over 300,000 YouTube videos on how to cook. The second half discusses taking advantage of people's ability to focus on important parts of a passage in a multiple-choice reading comprehension task to enhance the performance of an automatic question-answering system. We record the gaze location of hundreds of subjects as they read and answer questions about newspaper articles. We then train a state-of-the-art transformer model to predict human attention as well as correct answers, and find this leads to a substantial boost in performance over merely training the model to predict correct answers.
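
    As a small illustration of the first half's state-tracking idea, the sketch below keeps a distribution over hypotheses about what is currently in the mixing bowl and uses it to interpret an instruction with an elided object, such as a bare "mix". The state representation and update rule are assumptions for illustration, not the thesis model.
```python
from dataclasses import dataclass

@dataclass(frozen=True)
class KitchenState:
    bowl: frozenset          # ingredients currently in the mixing bowl

def update(belief, instruction):
    """belief maps KitchenState -> probability; returns the updated belief."""
    tokens = instruction.lower().rstrip(".").split()
    new_belief = {}
    for state, p in belief.items():
        if "add" in tokens:
            # assume the last token names the ingredient, e.g. "add remaining flour"
            new_state = KitchenState(state.bowl | {tokens[-1]})
        else:
            new_state = state
            if "mix" in tokens:
                # elided object: interpret as mixing whatever this hypothesis
                # says is already in the bowl
                print(f"  hypothesis (p={p:.2f}): mix {sorted(state.bowl)}")
        new_belief[new_state] = new_belief.get(new_state, 0.0) + p
    return new_belief

# prior: maybe eggs were added in an earlier, unobserved step
belief = {KitchenState(frozenset({"eggs"})): 0.7,
          KitchenState(frozenset()): 0.3}
for line in ["Add remaining flour.", "Then mix."]:
    print(line)
    belief = update(belief, line)
```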
